14 research outputs found

    Data Leakage and Evaluation Issues in Micro-Expression Analysis

    Full text link
    Micro-expressions have drawn increasing interest lately due to various potential applications. The task is, however, difficult as it incorporates many challenges from the fields of computer vision, machine learning and emotional sciences. Due to the spontaneous and subtle characteristics of micro-expressions, the available training and testing data are limited, which makes evaluation complex. We show that data leakage and fragmented evaluation protocols are issues in the micro-expression literature. We find that fixing data leaks can drastically reduce model performance, in some cases making the models perform similarly to a random classifier. To this end, we go through common pitfalls, propose a new standardized evaluation protocol using facial action units with over 2000 micro-expression samples, and provide an open-source library that implements the evaluation protocols in a standardized manner. Code will be available at \url{https://github.com/tvaranka/meb}
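The subject-level leakage the paper highlights arises when a random split puts clips of the same person into both the training and test sets, inflating scores. A minimal, library-agnostic sketch of the leak-free alternative (leave-one-subject-out splitting, with hypothetical subject IDs) might look like:

```python
import numpy as np

def leave_one_subject_out(subjects):
    """Yield (train_idx, test_idx) pairs such that no subject appears
    in both train and test, avoiding subject-level data leakage."""
    arr = np.asarray(subjects)
    for s in np.unique(arr):
        test = np.where(arr == s)[0]
        train = np.where(arr != s)[0]
        yield train, test

# Toy example: 6 samples from 3 subjects.
subjects = ["s1", "s1", "s2", "s2", "s3", "s3"]
for train, test in leave_one_subject_out(subjects):
    # Every test sample comes from a subject unseen during training.
    assert not {subjects[i] for i in train} & {subjects[i] for i in test}
```

The snippet only illustrates the splitting principle; the paper's open-source library is the place to look for the full standardized protocols.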

    Graph-based Facial Affect Analysis: A Review of Methods, Applications and Challenges

    Full text link
    Facial affect analysis (FAA) using visual signals is important in human-computer interaction. Early methods focus on extracting appearance and geometry features associated with human affects while ignoring the latent semantic information among individual facial changes, leading to limited performance and generalization. Recent work attempts to establish a graph-based representation to model these semantic relationships and develop frameworks to leverage them for various FAA tasks. In this paper, we provide a comprehensive review of graph-based FAA, including the evolution of algorithms and their applications. First, the FAA background knowledge is introduced, especially on the role of the graph. We then discuss approaches widely used for graph-based affective representation in the literature and show a trend towards graph construction. For relational reasoning in graph-based FAA, existing studies are categorized according to their usage of traditional methods or deep models, with a special emphasis on the latest graph neural networks. Performance comparisons of the state-of-the-art graph-based FAA methods are also summarized. Finally, we discuss the challenges and potential directions. As far as we know, this is the first survey of graph-based FAA methods. Our findings can serve as a reference for future research in this field. Comment: 20 pages, 12 figures, 5 tables
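As a concrete illustration of the graph construction step such surveys review, nodes are commonly facial landmarks and edges encode spatial proximity. A toy sketch (hypothetical landmark coordinates, simple k-nearest-neighbor connectivity) could be:

```python
import numpy as np

def knn_adjacency(landmarks, k=2):
    """Build a symmetric k-NN adjacency matrix over facial landmarks:
    nodes = landmark coordinates, edges = spatial proximity. This is
    one common starting point for graph-based affect representations."""
    n = len(landmarks)
    # Pairwise Euclidean distances between all landmarks.
    d = np.linalg.norm(landmarks[:, None] - landmarks[None, :], axis=-1)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        nearest = np.argsort(d[i])[1:k + 1]  # skip self (distance 0)
        adj[i, nearest] = 1
    return adj | adj.T                       # symmetrize the graph

# Five toy 2-D landmark positions (e.g. eye corners, nose tip, mouth corners).
pts = np.array([[0, 0], [1, 0], [0.5, 0.5], [0, 1], [1, 1]], dtype=float)
A = knn_adjacency(pts, k=2)
```

The adjacency matrix would then feed a relational-reasoning model such as a graph neural network.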

    Machine learning for perceiving facial micro-expression

    No full text
    Abstract Emotion analysis plays an important role in humans’ daily lives. Facial expression is one of the major ways to express emotions. Besides the common facial expressions we see every day, emotion can also be expressed in a special format, the micro-expression. Micro-expressions (MEs) are involuntary facial movements that occur in reaction to emotional stimuli; they reveal people’s hidden feelings in high-stakes situations and have many potential applications, such as clinical diagnosis, ensuring national security, and conducting interrogations. However, ME recognition is challenging due to the low intensity, short duration, and small-scale datasets. This thesis is a thorough summary of the important subjects for ME recognition, consisting of five papers corresponding to the progress of my research. Firstly, an automatic ME recognition system based on deep learning is introduced. Secondly, Micro-expression Action Unit (ME-AU) detection is described, which plays an important role in facial behavior analysis. Thirdly, robust ME recognition with AU detection is presented, verifying the contribution of AU detection to ME recognition. The contributions of this study can be classified into three categories: (1) A deep ME recognition approach based on the apex frame is proposed, demonstrating that deep learning can achieve impressive ME recognition performance from the apex frame; (2) We break ground on the ME-AU study and provide baselines and novel transfer learning methods for future work on ME-AU detection; (3) A unified framework for ME recognition with AU detection based on contrastive learning is proposed, verifying the contribution of AUs to robust ME recognition. Lastly, we summarize the contributions of the work and propose future plans for ME studies based on the limitations of the current work.

    Deep Learning for Micro-expression Recognition: A Survey

    Full text link
    Micro-expressions (MEs) are involuntary facial movements revealing people's hidden feelings in high-stakes situations and have practical importance in medical treatment, national security, interrogations and many human-computer interaction systems. Early methods for micro-expression recognition (MER) were mainly based on traditional appearance and geometry features. Recently, with the success of deep learning (DL) in various fields, neural networks have received increasing interest in MER. Different from macro-expressions, MEs are spontaneous, subtle, and rapid facial movements, making data collection difficult and the resulting datasets small-scale. These ME characteristics make DL-based MER challenging. To date, various DL approaches have been proposed to address these issues and improve MER performance. In this survey, we provide a comprehensive review of deep MER, including datasets, the deep MER pipeline, and the benchmarking of the most influential methods. This survey defines a new taxonomy for the field, encompassing all aspects of MER based on DL. For each aspect, the basic approaches and advanced developments are summarized and discussed. In addition, we conclude with the remaining challenges and potential directions for the design of robust deep MER systems. To the best of our knowledge, this is the first survey of deep MER methods, and it can serve as a reference point for future MER research. Comment: 20 pages, 8 figures

    Intra- and inter-contrastive learning for micro-expression action unit detection

    No full text
    Abstract Encoding facial expressions via Action Units (AUs) has been found effective for resolving the ambiguity issue among different expressions. In the literature, AU detection has been extensively researched for macro-expressions; however, there is limited research on AU analysis for micro-expressions (MEs). Micro-expression Action Unit (MEAU) detection is a challenging problem because of the subtle facial motion. To alleviate this problem, in this paper we study contrastive learning for modeling subtle AUs and propose a novel MEAU detection method that learns the intra- and inter-contrastive information among MEs. Through the intra-contrastive learning module, the difference between the onset and apex frames is enlarged and utilized to obtain a discriminative representation for low-intensity AU detection. In addition, considering the subtle differences between MEAUs, inter-contrastive learning is designed to automatically explore and enlarge the differences between different AUs to enhance the robustness of MEAU detection. Extensive experiments on two widely used ME databases have demonstrated the effectiveness and generalization ability of our proposed method.
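The abstract does not spell out the paper's exact losses; as a rough illustration of the two ideas, one can (a) take the normalized onset-to-apex feature difference as the "intra" representation and (b) apply a generic InfoNCE-style objective across AU labels as a stand-in for the "inter" part. A NumPy sketch under those assumptions:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def intra_contrastive_repr(onset_feat, apex_feat):
    # Represent a micro-expression by the normalized apex-minus-onset
    # feature difference, amplifying subtle motion over static appearance.
    return l2_normalize(apex_feat - onset_feat)

def inter_contrastive_loss(reprs, labels, temperature=0.1):
    # InfoNCE-style objective: pull together samples that share an AU
    # label, push apart samples with different AUs.
    z = l2_normalize(reprs)
    sim = z @ z.T / temperature
    n, loss, count = len(labels), 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # Log-sum-exp over all other samples, self-similarity excluded.
        log_denom = np.log(np.exp(np.delete(sim[i], i)).sum())
        for j in positives:
            loss += log_denom - sim[i, j]
            count += 1
    return loss / count
```

With this objective, well-separated AU clusters yield a lower loss than shuffled ones, which is the pressure the "inter" term is meant to exert.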

    Micro-expression action unit detection with dual-view attentive similarity-preserving knowledge distillation

    No full text
    Abstract Encoding facial expressions via action units (AUs) has been found to be effective in resolving the ambiguity issue among different expressions. Therefore, AU detection plays an important role in emotion analysis. While a number of AU detection methods have been proposed for common facial expressions, there has been very limited study of micro-expression AU detection. Micro-expression AU detection is challenging because micro-expression appearance changes are weak and the spontaneous characteristic of micro-expressions makes data collection difficult, resulting in small-scale datasets. In this paper, we focus on micro-expression AU detection and expect to contribute to the community. To address the above issues, a novel dual-view attentive similarity-preserving distillation method is proposed for robust micro-expression AU detection by leveraging massive facial expressions in the wild. Through such an attentive similarity-preserving distillation method, we overcome the domain shift problem, and essential AU knowledge from common facial AUs is efficiently distilled. Furthermore, considering that the generalization ability of the teacher network is important for knowledge distillation, a semi-supervised co-training approach is developed to construct a generalized teacher network for learning discriminative AU representations. Extensive experiments have demonstrated that our proposed knowledge distillation method can effectively distill and transfer cross-domain knowledge for robust micro-expression AU detection.
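The "similarity-preserving" idea (in the sense of Tung and Mori's similarity-preserving knowledge distillation, which the title suggests but the abstract does not detail) matches pairwise batch similarities between teacher and student instead of the raw features, so the two networks may have different feature widths. A minimal NumPy sketch:

```python
import numpy as np

def pairwise_similarity(feats):
    """Row-normalized batch similarity matrix G = norm(F F^T)."""
    g = feats @ feats.T
    norms = np.linalg.norm(g, axis=1, keepdims=True) + 1e-8
    return g / norms

def similarity_preserving_loss(teacher_feats, student_feats):
    """Match the pairwise sample similarities produced by teacher and
    student rather than the features themselves; this works even when
    the two networks have different feature dimensions."""
    gt = pairwise_similarity(teacher_feats)
    gs = pairwise_similarity(student_feats)
    b = teacher_feats.shape[0]  # batch size
    return np.sum((gt - gs) ** 2) / (b * b)
```

Because only the batch-by-batch similarity structure is compared, a micro-expression student can be supervised by a teacher trained on in-the-wild expressions despite the domain and architecture gap.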

    Joint local and global information learning with single apex frame detection for micro-expression recognition

    No full text
    Abstract Micro-expressions (MEs) are rapid and subtle facial movements that are difficult to detect and recognize. Most recent works have attempted to recognize MEs with spatial and temporal information from video clips. According to psychological studies, the apex frame conveys the most emotional information expressed in facial expressions. However, it is not clear how much a single apex frame contributes to micro-expression recognition. To address that problem, this paper firstly proposes a new method to detect the apex frame by estimating pixel-level change rates in the frequency domain. With frequency information, it performs more effectively on apex frame spotting than existing apex frame spotting methods based on spatio-temporal change information. Secondly, with the apex frame, this paper proposes a joint feature learning architecture coupling local and global information to recognize MEs, because not all regions make the same contribution to ME recognition and some regions do not even contain any emotional information. More specifically, the proposed model combines local information learned from the facial regions contributing the major emotional information with global information learned from the whole face. Leveraging the local and global information enables our model to learn discriminative ME representations and suppress the negative influence of regions unrelated to MEs. The proposed method is extensively evaluated on the CASME, CASME II, SAMM, SMIC, and composite databases. Experimental results demonstrate that our method with the detected apex frame achieves highly promising ME recognition performance compared with state-of-the-art methods employing the whole ME sequence. Moreover, the results indicate that the apex frame can contribute significantly to micro-expression recognition.
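The abstract does not spell out the spotting algorithm; as an illustrative stand-in for frequency-domain apex spotting, one can score each frame by the spectral amplitude of its difference from the onset frame and pick the maximum (toy data, hypothetical scoring):

```python
import numpy as np

def spot_apex(frames):
    """Illustrative sketch of frequency-domain apex spotting: score each
    frame by the amplitude of the 2-D FFT of its difference from the
    onset (first) frame; the apex is the frame with the largest change."""
    onset = frames[0].astype(float)
    scores = []
    for f in frames[1:]:
        diff = np.fft.fft2(f.astype(float) - onset)
        scores.append(np.abs(diff).sum())  # total spectral amplitude change
    return int(np.argmax(scores)) + 1      # +1: scores start at frame 1

# Toy clip: 8x8 frames where frame 3 carries the strongest deviation.
clip = np.zeros((6, 8, 8))
clip[2, 2:4, 2:4] = 0.5
clip[3, 2:4, 2:4] = 1.0  # peak intensity -> expected apex
clip[4, 2:4, 2:4] = 0.5
```

Since the FFT is linear, a frame whose motion pattern is twice as intense scores twice as high, so the maximum lands on the most pronounced frame.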

    From emotion AI to cognitive AI

    No full text
    Abstract Cognitive computing is recognized as the next era of computing. To make hardware and software systems more human-like, emotion artificial intelligence (AI) and cognitive AI, which simulate human intelligence, are at the core of real AI. The current boom of sentiment analysis and affective computing in computer science has given rise to the rapid development of emotion AI. However, research on cognitive AI has only started in the past few years. In this visionary paper, we briefly review the current development of emotion AI, introduce the concept of cognitive AI, and propose the envisioned future of cognitive AI, which intends to let computers think, reason, and make decisions in ways similar to those of humans. The important aspects of cognitive AI in terms of engagement, regulation, decision making, and discovery are further discussed. Finally, we propose important directions for constructing future cognitive AI, including data and knowledge mining, multi-modal AI explainability, hybrid AI, and potential ethical challenges.

    Can micro-expression be recognized based on single apex frame?

    No full text
    Abstract Micro-expressions are rapid and subtle facial movements, which makes them difficult to detect and recognize. Most recent works have attempted to recognize micro-expressions using the spatial and dynamic information from the video clip. Physiological studies have demonstrated that the apex frame conveys the most emotion expressed in a facial expression, so it may be reasonable to use the apex frame to improve micro-expression recognition. However, it remains unclear how much the apex frame contributes to micro-expression recognition. In this paper, we primarily focus on quantifying that contribution by using the apex frame for micro-expression recognition. Firstly, we propose a new method to detect the apex frame in the frequency domain, since the apex frame is strongly correlated with the amplitude change in the frequency domain. Secondly, we propose to apply a deep convolutional neural network (DCNN) to the apex frame to recognize the micro-expression. Extensive experimental results on the CASME II database show that our method achieves considerable improvement over the state-of-the-art methods in micro-expression recognition. These results also demonstrate that the apex frame can express the major emotion in a micro-expression.

    Facial micro-expressions: an overview

    No full text
    Abstract Micro-expression (ME) is an involuntary, fleeting, and subtle facial expression. It may occur in high-stakes situations when people attempt to conceal or suppress their true feelings. Therefore, MEs can provide essential clues to people’s true feelings and have plenty of potential applications, such as national security, clinical diagnosis, and interrogations. In recent years, ME analysis has gained much attention in various fields due to its practical importance, especially automatic ME analysis in computer vision, as MEs are difficult to perceive with the naked eye. In this survey, we provide a comprehensive review of ME development in the field of computer vision, from ME studies in psychology and early attempts in computer vision to various computational ME analysis methods and future directions. Four main tasks in ME analysis are specifically discussed, including ME spotting, ME recognition, ME action unit detection, and ME generation, in terms of approaches, advanced developments, and challenges. Through this survey, readers can understand MEs from the perspectives of both psychology and computer vision, and grasp future research directions in ME analysis.